
    Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars

    The discovery of neural architectures from simple building blocks is a long-standing goal of Neural Architecture Search (NAS). Hierarchical search spaces are a promising step towards this goal but lack a unifying search space design framework and typically search over only a limited aspect of architectures. In this work, we introduce a unifying search space design framework based on context-free grammars that can naturally and compactly generate expressive hierarchical search spaces that are 100s of orders of magnitude larger than common spaces from the literature. By enhancing and using their properties, we effectively enable search over complete architectures and can foster regularity. Further, we propose an efficient hierarchical kernel design for a Bayesian Optimization search strategy to search over such huge spaces. We demonstrate the versatility of our search space design framework and show that our search strategy can be superior to existing NAS approaches. Code is available at https://github.com/automl/hierarchical_nas_construction.
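
    To make the grammar idea concrete, here is a minimal sketch (not the authors' implementation; the toy grammar and operation names below are illustrative assumptions) of how a context-free grammar compactly defines a hierarchical space of architectures and how a derivation can be sampled from it:

```python
import random

# Hypothetical toy grammar: nonterminals map to lists of productions.
# Real NAS grammars encode topologies, operations, and repetition across
# several hierarchy levels; this only sketches the mechanism.
GRAMMAR = {
    "ARCH":  [["seq", "BLOCK", "BLOCK"], ["res", "BLOCK"]],
    "BLOCK": [["seq", "OP", "OP"], ["OP"]],
    "OP":    [["conv3x3"], ["conv1x1"], ["max_pool"], ["identity"]],
}

def sample(symbol="ARCH", rng=random):
    """Recursively expand a nonterminal into a derivation (an architecture)."""
    if symbol not in GRAMMAR:   # terminal: a combinator or primitive operation
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return [sample(s, rng) for s in production]

print(sample())  # e.g. ['seq', [['conv3x3'], ['identity']], [['max_pool']]]
```

    Because every derivation is a tree over a fixed rule set, the space grows combinatorially with depth while remaining compactly described, which is what makes spaces orders of magnitude larger than flat cell-based ones feasible to define.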

    PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning

    Hyperparameters of Deep Learning (DL) pipelines are crucial for their downstream performance. While a large number of methods for Hyperparameter Optimization (HPO) have been developed, their incurred costs are often untenable for modern DL. Consequently, manual experimentation is still the most prevalent approach to optimize hyperparameters, relying on the researcher's intuition, domain knowledge, and cheap preliminary explorations. To resolve this misalignment between HPO algorithms and DL researchers, we propose PriorBand, an HPO algorithm tailored to DL that is able to utilize both expert beliefs and cheap proxy tasks. Empirically, we demonstrate PriorBand's efficiency across a range of DL benchmarks, showing its gains under informative expert input and its robustness against poor expert beliefs.
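
    As a rough illustration of the idea (a minimal sketch under stated assumptions, not the PriorBand algorithm itself), one can bias Hyperband-style random sampling toward an expert prior and let cheap low-fidelity evaluations correct a poor prior; all names and constants below are hypothetical:

```python
import random

def sample_config(prior_sample, uniform_sample, prior_weight):
    """Draw from the expert prior with probability prior_weight, else uniformly."""
    return prior_sample() if random.random() < prior_weight else uniform_sample()

def successive_halving(configs, evaluate, budgets=(1, 3, 9)):
    """Evaluate all configs cheaply first, then promote the best third
    to successively larger budgets (lower loss is better)."""
    for budget in budgets:
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[:max(1, len(scored) // 3)]
    return configs[0]

# Hypothetical 1-D problem: the expert believes the optimum lies near 0.2.
prior   = lambda: random.gauss(0.2, 0.05)
uniform = lambda: random.random()
loss    = lambda x, budget: (x - 0.25) ** 2 + random.gauss(0, 0.1 / budget)

pool = [sample_config(prior, uniform, prior_weight=0.7) for _ in range(27)]
print(successive_halving(pool, loss))
```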

    Fiber Reinforced Composite Cores and Panels

    A fiber reinforced core panel is formed from strips of plastics foam helically wound with layers of rovings to form webs which may extend in a wave pattern or may intersect transverse webs. Hollow tubes may replace foam strips. Axial rovings cooperate with overlying helically wound rovings to form a beam or a column. Wound roving patterns may vary along strips for structural efficiency. Wound strips may alternate with spaced strips, and spacers between the strips enhance web buckling strength. Continuously wound rovings between spaced strips permit folding to form panels with reinforced edges. Continuously wound strips are helically wrapped to form annular structures, and composite panels may combine both thermoset and thermoplastic resins. Continuously wound strips or strip sections may be continuously fed either longitudinally or laterally into molding apparatus which may receive skin materials to form reinforced composite panels.

    Contingent Valuation and Social Choice

    How can you measure the net benefits to society from actions that impact environmental resources? An economist's answer is to employ Hicksian consumer surplus, determining the equivalent variation in income that leaves each consumer indifferent to the action. When consumers are rational and consumer surplus can be measured reliably from market demand functions, this is a satisfactory basis for welfare calculation, subject to the customary caveats about distributional equity and consistency if compensation is not actually paid. When externalities, public goods, or informational asymmetries interfere with the determination of consumer surplus from market demand functions, one can try to set up a hypothetical market to elicit an individual's equivalent variation, or willingness-to-pay (WTP). This is called the contingent valuation method (CVM). The approach elicits stated preferences from a sample of consumers using either open-ended questions that ask directly for WTP, or referendum (closed-ended) questions that present a bid or a sequence of bids to the consumer and ask for a yes or no vote on whether each bid exceeds the subject's WTP. A single referendum experiment presents only one bid; a double referendum experiment presents a second bid that is conditioned on the subject's response to the first bid, lower if the first response is no and higher if it is yes. An extensive literature has investigated the use of CVM to value environmental goods, and in recent years has promoted it for evaluation of goods such as endangered species and wilderness areas whose value comes primarily from existence rather than active use. The typical CVM experiment in environmental economics asks about a single commodity, often with a fairly abbreviated or stylized description that assumes the consumer can draw upon prior knowledge; typically, there is no training of the consumer to reduce inconsistent responses.

    In assessing CVM, there are three commonsense questions that can be asked: (a) Is the method psychometrically robust, in that results cannot be altered substantively by changes in survey format, questionnaire design, and instructions that should be inconsequential when behavior is driven by maximization of rational preferences? (b) Is the method statistically reliable, in that the distribution of WTP can be estimated with acceptable precision using practical sample sizes? Reliability is a particular issue if CV surveys produce extreme responses with some probability, perhaps due to strategic misrepresentation. (c) Is the method economically sensible, in that the individual preferences measured by CVM are consistent with the logical requirements of rationality (e.g., transitivity), and at least broadly consistent with sensible features of economic preferences (e.g., plausible budget shares and income elasticities)?

    CVM might fail to meet these criteria because respondents receive incomplete information on the consequences of the available choices, or are given inadequate incentives to be truthful and avoid strategic misrepresentation, or because the experimental design is not sufficiently rich to detect and compensate for systematic and random response errors. Beyond such technical problems, there could be a fundamental failure of CVM if consumers do not have stable, classical preferences for the class of commodities, so that the foundations of Hicksian welfare analysis break down. Intuitively, the further removed a class of commodities is from market goods, where the consumer has the experience of repeated choices and the discipline of market forces, the greater the possibility of both technical and fundamental failures. The broad sweep of evidence from market research, cognitive psychology, and experimental economics suggests that the existence value of natural resources, involving very complex commodities that are far outside consumers' market experience, will be vulnerable to these failures (McFadden 1986). The following sections discuss, in turn, a series of statistical issues in analyzing WTP data, parametric methods for estimating mean WTP, an experiment that was designed to detect and quantify technical failures of CVM, and the results from the experiment.

    Using referendum questions complicates matters only slightly, since votes at a sufficiently broad and closely spaced range of bid levels can be used to estimate directly the distribution of WTP, and this in turn can be used to estimate the population mean. This claim is proved in McFadden (1994), which gives practical nonparametric estimators and describes the restrictions necessary on referendum experimental design for these estimators to have good large-sample properties. In overview, the result is that with truthful referendum data there are estimators whose mean square error is inversely proportional to sample size, provided the experimental design "undersmooths" by taking a relatively large number of bid levels, with relatively small samples at each bid (when the support of the WTP distribution is not finite, additional restrictions on tail behavior are needed to assure the existence of mean WTP and the stated rate of convergence of nonparametric estimators). For example, when WTP is restricted a priori to a finite interval, one could distribute the bids evenly over this interval, with one respondent at each level. The common practice in CV referendum studies of taking a relatively small number of bid levels leads to estimators whose mean square errors decline more slowly with sample size.
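
    The referendum logic lends itself to a short numerical illustration. The following is a minimal sketch (a simplified survival-curve estimator under the stated assumptions, not McFadden's exact procedure): assuming truthful votes and WTP supported on a known interval [0, B], the yes-rate at each bid level estimates P(WTP > b), and mean WTP is the area under that curve; many bid levels with few respondents each mirror the "undersmoothing" design described above:

```python
import numpy as np

rng = np.random.default_rng(0)
B = 100.0                          # a priori upper bound on WTP
bids = np.linspace(0.0, B, 201)    # many bid levels ("undersmoothing")
n_per_bid = 5                      # small sample at each level

# Hypothetical true WTP distribution: Gamma(2, 15), mean 30.
true_wtp = lambda n: rng.gamma(shape=2.0, scale=15.0, size=n)

# Simulate truthful yes/no votes: yes iff the respondent's WTP exceeds the bid.
yes_rate = np.array([np.mean(true_wtp(n_per_bid) > b) for b in bids])

# Mean WTP = area under the estimated survival curve on [0, B]
# (trapezoid rule written out for portability across NumPy versions).
mean_wtp_hat = np.sum((yes_rate[:-1] + yes_rate[1:]) / 2 * np.diff(bids))
print(f"estimated mean WTP: {mean_wtp_hat:.2f} (true mean is 30.0)")
```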

    An S/MAR-based L1 retrotransposition cassette mediates sustained levels of insertional mutagenesis without suffering from epigenetic silencing of DNA methylation

    L1 is an insertional mutagen that is capable of mediating permanent gene disruption in mammalian genomes. However, currently available L1 retrotransposition vectors exhibit low or unstable transgene expression when expressed in somatic cells and tissues. This restriction limits their potential utility in long-term screening procedures or somatic mutagenesis applications. In this study, we addressed this problem by developing a minicircle, nonviral L1 retrotransposition vector using a scaffold/matrix attachment region (S/MAR) in the vector backbone and evaluated its utility in human cell lines. The S/MAR-based L1 retrotransposition vector provides stable, elevated levels of L1 expression compared to the currently used EBNA1-based L1 vector. In addition, the S/MAR elements effectively mediate sustained levels of L1 retrotransposition in prolonged cell culture without suffering from epigenetic silencing by DNA methylation or from vector integration problems, even in the absence of selection pressure. These findings indicate that the simple inclusion of an S/MAR in the vector backbone increases L1 expression and retrotransposition, providing an effective tool for generating insertional mutagenesis in large-scale somatic mutagenesis applications in mammalian cells.

    πBO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization

    Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample efficiency, vanilla BO cannot utilize readily available prior beliefs the practitioner has on the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose πBO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. In contrast to previous approaches, πBO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when πBO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that πBO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that πBO improves on the state-of-the-art performance for a popular deep learning task, with a 12.5× time-to-accuracy speedup over prominent BO approaches.
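
    A minimal sketch of the core mechanism (the decay constant beta and all names below are illustrative assumptions, not the paper's code): the acquisition function is multiplied by the user prior raised to a power that decays with the iteration count n, so the prior's influence fades as evidence accumulates:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Standard EI for minimization, given GP posterior mean/std at a point."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def pi_weighted_ei(mu, sigma, best, prior_pdf_x, n, beta=10.0):
    """EI multiplied by the user prior raised to beta/n: the prior dominates
    early and washes out as the iteration count n grows."""
    return expected_improvement(mu, sigma, best) * prior_pdf_x ** (beta / n)

# Hypothetical use: prior belief that the optimum lies near x = 0.2.
x = np.linspace(0, 1, 5)
mu, sigma, best = np.sin(3 * x), np.full_like(x, 0.3), 0.1
prior = norm(loc=0.2, scale=0.1).pdf(x)
print(pi_weighted_ei(mu, sigma, best, prior, n=3))
```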

    Winning solutions and post-challenge analyses of the ChaLearn AutoDL challenge 2019

    This paper reports the results and post-challenge analyses of ChaLearn's AutoDL challenge series, which helped sort out a profusion of AutoML solutions for Deep Learning (DL) that had been introduced in a variety of settings but lacked fair comparisons. All input data modalities (time series, images, videos, text, tabular) were formatted as tensors and all tasks were multi-label classification problems. Code submissions were executed on hidden tasks, with limited time and computational resources, pushing solutions that get results quickly. In this setting, DL methods dominated, though popular Neural Architecture Search (NAS) was impractical. Solutions relied on fine-tuned pre-trained networks, with architectures matching the data modality. Post-challenge tests did not reveal improvements beyond the imposed time limit. While no component is particularly original or novel, a high-level modular organization emerged featuring a “meta-learner”, “data ingestor”, “model selector”, “model/learner”, and “evaluator”. This modularity enabled ablation studies, which revealed the importance of (off-platform) meta-learning, ensembling, and efficient data management. Experiments on heterogeneous module combinations further confirm the (local) optimality of the winning solutions. Our challenge legacy includes an ever-lasting benchmark (http://autodl.chalearn.org), the open-sourced code of the winners, and a free “AutoDL self-service”.
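
    The modular organization described above can be pictured with a short, hypothetical interface sketch (module names follow the paper's terminology; the signatures are illustrative assumptions, not the winners' code):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class AutoDLPipeline:
    """Hypothetical wiring of the five modules the post-challenge analyses
    identified; swapping any field enables the ablation studies described."""
    meta_learner: Callable[[Any], dict]      # picks priors from task metadata
    ingestor: Callable[[Any], Any]           # formats raw tensors into batches
    model_selector: Callable[[dict], Any]    # maps modality to a pretrained net
    learner: Callable[[Any, Any], Any]       # fine-tunes the chosen model
    evaluator: Callable[[Any, Any], float]   # scores under the time budget

    def run(self, task):
        priors = self.meta_learner(task)
        data = self.ingestor(task)
        model = self.model_selector(priors)
        trained = self.learner(model, data)
        return self.evaluator(trained, data)
```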